Risk-Sensitive Reinforcement Learning: Near-Optimal Risk-Sample Tradeoff in Regret
We study risk-sensitive reinforcement learning in episodic Markov decision processes with unknown transition kernels, where the goal is to optimize the total reward under the risk measure of exponential utility. We propose two provably efficient model-free algorithms, Risk-Sensitive Value Iteration (RSVI) and Risk-Sensitive Q-learning (RSQ). These algorithms implement a form of risk-sensitive optimism in the face of uncertainty, which adapts to both risk-seeking and risk-averse modes of exploration. We prove regret upper bounds for both algorithms; in these bounds, \beta is the risk parameter of the exponential utility function, S the number of states, A the number of actions, T the total number of timesteps, and H the episode length. On the flip side, we establish a regret lower bound showing that the exponential dependence on \beta and H is unavoidable for any algorithm with an \tilde{O}(\sqrt{T}) regret (even when the risk objective is on the same scale as the original reward), thus certifying the near-optimality of the proposed algorithms.
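For concreteness, the exponential-utility (entropic) risk objective and its associated nonlinear Bellman equation take the following standard form (notation assumed here for illustration, not quoted from the paper):

\max_\pi \frac{1}{\beta} \log \mathbb{E}_\pi \Big[ \exp\Big( \beta \sum_{h=1}^{H} r_h(s_h, a_h) \Big) \Big], \qquad Q_h(s,a) = r_h(s,a) + \frac{1}{\beta} \log \mathbb{E}_{s' \sim P_h(\cdot \mid s,a)} \big[ e^{\beta V_{h+1}(s')} \big],

with V_h(s) = \max_a Q_h(s,a). The case \beta > 0 is risk-seeking, \beta < 0 is risk-averse, and \beta \to 0 recovers the usual expected total reward.

The minimal NumPy sketch below assumes a known tabular MDP and shows only the nonlinear log-sum-exp Bellman backup that both algorithms build on; it is not the paper's RSVI or RSQ, which additionally handle unknown transitions via optimistic exploration.

import numpy as np

def entropic_value_iteration(P, r, H, beta):
    # P: array [H, S, A, S] of transition probabilities P_h(s' | s, a)
    # r: array [H, S, A] of rewards; beta: nonzero risk parameter
    S = r.shape[1]
    V = np.zeros(S)                    # terminal value V_{H+1} = 0
    for h in reversed(range(H)):
        # Nonlinear backup: Q_h = r_h + (1/beta) log E_{s'}[exp(beta V_{h+1}(s'))].
        # For large |beta| a log-sum-exp shift would be needed for numerical stability.
        Q = r[h] + np.log(P[h] @ np.exp(beta * V)) / beta   # shape [S, A]
        V = Q.max(axis=1)              # greedy over actions
    return V                           # optimal risk-sensitive value at step 1

As \beta \to 0 the backup tends to the risk-neutral r[h] + P[h] @ V, recovering standard value iteration.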
Review for NeurIPS paper: Risk-Sensitive Reinforcement Learning: Near-Optimal Risk-Sample Tradeoff in Regret
Additional Feedback: The regret bound in Lemma 1 is also straightforward. The algorithmic idea is to minimize the Bellman error, so the algorithmic novelty is okay. The key contribution of the paper is the regret analysis of the two algorithms, which is mostly built on a Taylor-series approximation of the value function. Theorems 1 and 2 present the regret bounds of the two algorithms, following a UCB-style analysis that draws on a Taylor-expansion approximation of the (nonlinear) Bellman equation.
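To see why a Taylor expansion is natural here (a sketch of the standard entropic-risk expansion, not the paper's exact lemma): for a bounded random variable X and small \beta,

\frac{1}{\beta} \log \mathbb{E}\big[ e^{\beta X} \big] = \mathbb{E}[X] + \frac{\beta}{2} \mathrm{Var}(X) + O(\beta^2),

so the nonlinear Bellman operator behaves like a variance-regularized perturbation of the risk-neutral one. This is what lets UCB-style concentration arguments carry over, with the higher-order terms presumably accounting for the exponential-in-\beta-and-H factors that the lower bound shows are unavoidable.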